
feat: add MiniMax as LLM provider with Guardian threat protection#1

Open
octo-patch wants to merge 1 commit into OraclesTech:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a first-class provider integration for Guardian SDK. MiniMax offers powerful LLM models (M2.7, M2.5) through an OpenAI-compatible API at https://api.minimax.io/v1. This provider wraps MiniMax-configured OpenAI clients with the same multi-layer threat detection pipeline used for OpenAI and Anthropic.

Changes

  • New provider: ethicore_guardian/providers/minimax_provider.py with MiniMaxProvider, ProtectedMiniMaxClient, ProtectedChat, ProtectedCompletions, and a create_protected_minimax_client() convenience factory
  • Auto-detection: Updated get_provider_for_client() in base_provider.py to detect MiniMax clients by checking base_url for 'minimax'
  • Dependencies: Added minimax optional dependency group in pyproject.toml (uses openai>=1.0.0)
  • Documentation: Added MiniMax provider example and install instructions to README
  • Tests: 30 tests in tests/test_minimax.py — 22 unit tests + 5 integration tests + 3 constant tests

Supported Models

| Model | Context |
| --- | --- |
| MiniMax-M2.7 | 1M tokens |
| MiniMax-M2.7-highspeed | 1M tokens (fast) |
| MiniMax-M2.5 | 204K tokens |
| MiniMax-M2.5-highspeed | 204K tokens (fast) |

Usage

import openai
from ethicore_guardian import Guardian, GuardianConfig
from ethicore_guardian.providers.minimax_provider import MiniMaxProvider

guardian = Guardian(config=GuardianConfig(api_key="my-app"))

minimax_client = openai.OpenAI(
    api_key="your-minimax-api-key",
    base_url="https://api.minimax.io/v1",
)

provider = MiniMaxProvider(guardian)
client = provider.wrap_client(minimax_client)

user_input = "Summarize today's headlines."  # any untrusted end-user input

response = client.chat.completions.create(
    model="MiniMax-M2.7",
    messages=[{"role": "user", "content": user_input}]
)

Test plan

  • All 30 unit + integration tests pass with mocked Guardian
  • Verify no regressions in existing OpenAI/Anthropic providers
  • Manual smoke test with real MiniMax API key (optional)
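The mocked-Guardian setup can look roughly like this; `run_guarded_create` is a hypothetical stand-in for the provider's intercepted create(), not code from the PR:

```python
# Illustrative shape of a mocked-Guardian unit test: both the Guardian and the
# MiniMax client are mocks, so no network access is needed.
from unittest.mock import MagicMock

def run_guarded_create(guardian, client, **kwargs):
    # stand-in for the wrapper's intercepted create(): analyze, then delegate
    guardian.analyze(kwargs["messages"])
    return client.chat.completions.create(**kwargs)

guardian = MagicMock()
client = MagicMock()
client.chat.completions.create.return_value = "mock-response"

result = run_guarded_create(
    guardian, client,
    model="MiniMax-M2.7",
    messages=[{"role": "user", "content": "hi"}],
)
assert result == "mock-response"
guardian.analyze.assert_called_once()
```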

@OraclesTech
Owner

Hey @octo-patch, really appreciate you taking the time to contribute to the Guardian SDK! It's clear you read the codebase carefully: the proxy structure, async/sync handling, and docstrings all follow the existing patterns closely, and the mock-based test suite matches how the existing providers are tested. Great first PR.

Before we merge, there are two things that need to be addressed:

guardian.py was not updated
get_provider_for_client() will correctly identify a MiniMax client, but guardian.wrap() then looks up self.providers['minimax'], which doesn't exist because _setup_providers() was never updated to register the new provider. This will raise a KeyError at runtime, making the provider unreachable through the public API. You'll need to add the registration there alongside the existing OpenAI/Anthropic entries.
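A sketch of the missing registration, with stand-in classes (the real _setup_providers() in guardian.py will differ in detail):

```python
class MiniMaxProvider:  # stand-in for the real provider class
    pass

class Guardian:
    def __init__(self):
        self.providers = {}
        self._setup_providers()

    def _setup_providers(self):
        # existing registrations (stand-ins here)...
        self.providers["openai"] = object()
        self.providers["anthropic"] = object()
        # ...plus the line this review asks for:
        self.providers["minimax"] = MiniMaxProvider()

    def wrap(self, client):
        key = "minimax"  # what get_provider_for_client() returns for this client
        # without the registration above, this lookup raises KeyError
        return self.providers[key]
```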

Fail-open on empty prompt (minimax_provider.py, line 283)
The if prompt_text and prompt_text.strip(): guard means that if extract_prompt() returns None or an empty string (due to a malformed payload or an unusual message format), threat analysis is skipped entirely and the request passes through unblocked. This is fail-open, which goes against the core security contract of Guardian SDK (Principle 14). The existing providers don't have this escape hatch. Either raise on an empty prompt or treat it as a challenge, but never silently allow it.
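A fail-closed version of that guard might look like this (EmptyPromptError and require_prompt are illustrative names, not SDK API):

```python
class EmptyPromptError(Exception):
    """Raised when prompt extraction yields nothing to analyze."""

def require_prompt(prompt_text):
    """Fail closed: reject an empty or unextractable prompt instead of
    skipping threat analysis and letting the request through."""
    if not prompt_text or not prompt_text.strip():
        raise EmptyPromptError(
            "extract_prompt() returned nothing; refusing to skip threat analysis"
        )
    return prompt_text
```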

Two smaller things worth a follow-up (not blocking):

The base_url string match works here since it's nested inside the OpenAI module check, but a comment explaining the reasoning would help future maintainers.
The attribute-copying loop in ProtectedCompletions.__init__ diverges from the lazy __getattr__ delegation pattern used in the OpenAI and Anthropic providers; it's worth aligning them for consistency.
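For reference, the lazy delegation pattern reads roughly like this (an illustrative class, not the provider's actual code):

```python
class LazyProtectedCompletions:
    """Illustrative wrapper using lazy delegation instead of attribute copying."""

    def __init__(self, inner):
        self._inner = inner

    def create(self, *args, **kwargs):
        # the intercepted path: Guardian's threat analysis would run here first
        return self._inner.create(*args, **kwargs)

    def __getattr__(self, name):
        # only called for attributes not found on the wrapper itself, so new
        # SDK attributes appear automatically without updating this class
        return getattr(self._inner, name)
```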
Fix the two blocking issues and this is good to go. Thanks again for the contribution brother, looking forward to adding MiniMax support into Guardian SDK!

P.S. Sorry it took me a little while to get back to you!
